Do arbitrary input–output mappings in parallel distributed processing networks require localist coding?
Authors
Abstract
Related resources
Combining Distributed and Localist Computations in Real-Time Networks
In order to benefit from the advantages of localist coding, neural models that feature winner-take-all representations at the top level of a network hierarchy must still solve the computational problems inherent in distributed representations at the lower levels. By carefully defining terms, demonstrating strong links among a variety of seemingly disparate formalisms, and debunking purported sh...
Robust Distributed Source Coding with Arbitrary Number of Encoders and Practical Code Design Technique
The robustness property can be added to a DSC system at the expense of reduced performance, i.e., an increased sum-rate. The aim of designing robust DSC schemes is to trade off system robustness against compression efficiency. In this paper, after deriving an inner bound on the rate–distortion region for the quadratic Gaussian MDC based RDSC system with two encoders, the structure of...
Scaling of global input–output networks
Examining scaling patterns of networks can help us understand how structural features relate to network behavior. Input–output networks consist of industries as nodes and inter-industrial exchanges of products as links. Previous studies consider limited measures for node strengths and link weights, and also ignore the impact of dataset choice. We consider a comprehensive set of indicato...
Distributed vs. Localist Representations
One of the central claims associated with the parallel distributed processing approach popularized by D. E. Rumelhart, J. L. McClelland and the PDP Research Group is that knowledge is coded in a distributed fashion. Localist representations are widely rejected within this perspective. It is important to note, however, that connectionist networks can learn localist representations, and many conne...
Localist Attractor Networks
Attractor networks, which map an input space to a discrete output space, are useful for pattern completion: cleaning up noisy or missing input features. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection...
Journal
Journal title: Language, Cognition and Neuroscience
Year: 2016
ISSN: 2327-3798, 2327-3801
DOI: 10.1080/23273798.2016.1256490